Results 1-20 of 758
1.
Perspect Psychol Sci ; : 17456916241242734, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648556

ABSTRACT

A recent article in Perspectives on Psychological Science (Webb & Tangney, 2022) reported a study in which just 2.6% of participants recruited on Amazon's Mechanical Turk (MTurk) were deemed "valid." The authors highlighted some well-established limitations of MTurk, but their central claims, that MTurk is "too good to be true" and that it captured "only 14 human beings . . . [out of] N = 529," are radically misleading, yet they have been repeated widely. This commentary aims to (a) correct the record (i.e., by showing that Webb and Tangney's approach to data collection led to unusually low data quality) and (b) offer a shift in perspective for running high-quality studies online. Negative attitudes toward MTurk sometimes reflect a fundamental misunderstanding of what the platform offers and how it should be used in research. Beyond pointing to research that details strategies for effective design and recruitment on MTurk, we stress that MTurk is not suitable for every study. Effective use requires specific expertise and design considerations. Like all research tools, from advanced hardware to specialist software, MTurk places constraints on what it should be used for. Ultimately, high-quality data are the responsibility of the researcher, not the crowdsourcing platform.

2.
J Int Neuropsychol Soc ; : 1-9, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38616725

ABSTRACT

OBJECTIVE: Brain areas implicated in semantic memory can be damaged in patients with epilepsy (PWE). However, it is challenging to separate semantic processing deficits from acoustic, linguistic, and other verbal aspects in current neuropsychological assessments. We developed a new Visual-based Semantic Association Task (ViSAT) to evaluate nonverbal semantic processing in PWE. METHOD: The ViSAT was adapted from similar predecessors (the Pyramids & Palm Trees test, PPT, and the Camels & Cactus Test, CCT) and comprises 100 unique trials using real-life color pictures that avoid demographic, cultural, and other potential confounds. We obtained performance data from 23 PWE participants and 24 control participants (Control), along with crowdsourced normative data from 54 Amazon Mechanical Turk (MTurk) workers. RESULTS: The ViSAT reached a consensus above 90% in 91.3% of trials, compared with 83.6% in the PPT and 82.9% in the CCT. A deep learning model demonstrated that non-semantic visual features of the stimulus images (color, shape) did not influence top answer choices (p = 0.577). The PWE group had lower accuracy than the Control group (p = 0.019). PWE also had longer response times than the Control group overall, and this difference was amplified during the semantic processing (trial answer) stage (both p < 0.001). CONCLUSIONS: This study demonstrated performance impairments in PWE that may reflect dysfunction of nonverbal semantic memory circuits, such as seizure onset zones overlapping with key semantic regions (e.g., the anterior temporal lobe). The ViSAT paradigm avoids confounds, is repeatable/longitudinal, captures behavioral data, and is open source; we therefore propose it as a strong alternative for clinical and research assessment of nonverbal semantic memory.

3.
J Med Internet Res ; 26: e51138, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602750

ABSTRACT

Modern machine learning approaches have led to performant diagnostic models for a variety of health conditions. Several machine learning approaches, such as decision trees and deep neural networks, can, in principle, approximate any function. However, this power is both a gift and a curse, as the propensity toward overfitting is magnified when the input data are heterogeneous and high dimensional and the output class is highly nonlinear. This issue can especially plague diagnostic systems that predict behavioral and psychiatric conditions diagnosed with subjective criteria. An emerging solution is crowdsourcing, in which crowd workers annotate complex behavioral features in return for monetary compensation or a gamified experience. These labels can then be used to derive a diagnosis, either directly or by using the labels as inputs to a diagnostic machine learning model. This viewpoint describes existing work in this emerging field and discusses ongoing challenges and opportunities with crowd-powered diagnostic systems, a nascent field of study. With the correct considerations, adding crowdsourcing to human-in-the-loop machine learning workflows for the prediction of complex and nuanced health conditions can accelerate screening, diagnostics, and ultimately access to care.
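The two routes from crowd labels to a diagnosis described above can be illustrated with a minimal sketch: a plain majority vote turns noisy worker annotations into consensus behavioral features that could then feed a downstream classifier. The feature names, labels, and worker data below are hypothetical, and a production pipeline would typically weight workers by estimated reliability rather than count votes equally.

```python
from collections import Counter

def aggregate_crowd_labels(annotations):
    """Majority-vote each behavioral feature across crowd workers.

    annotations: list of dicts, one per worker, mapping feature -> label.
    Returns a dict mapping each feature to its consensus label.
    """
    consensus = {}
    for feature in annotations[0]:
        votes = Counter(worker[feature] for worker in annotations)
        consensus[feature] = votes.most_common(1)[0][0]
    return consensus

# Three hypothetical workers annotating two behavioral features
workers = [
    {"eye_contact": "reduced", "speech_rate": "typical"},
    {"eye_contact": "reduced", "speech_rate": "fast"},
    {"eye_contact": "typical", "speech_rate": "fast"},
]
consensus = aggregate_crowd_labels(workers)
# consensus == {"eye_contact": "reduced", "speech_rate": "fast"};
# these consensus features could serve as inputs to a diagnostic model
```

The same aggregation can be used directly (consensus as diagnosis) or indirectly (consensus features as classifier inputs), mirroring the two options the abstract mentions.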


Subject(s)
Crowdsourcing; Mental Disorders; Humans; Precision Medicine; Workflow; Machine Learning
4.
Sensors (Basel) ; 24(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38475044

ABSTRACT

Remote sensing image change detection has become a popular tool for monitoring the change type, area, and distribution of land cover, including cultivated land, forest land, photovoltaic installations, roads, and buildings. However, traditional methods, which rely on pre-annotation and on-site verification, are time consuming and struggle to meet timeliness requirements. Drawing on advances in artificial intelligence, this paper proposes an automatic change detection model and a crowdsourcing collaborative framework. The framework uses human-in-the-loop technology and an active learning approach to transform manual interpretation into human-machine collaborative intelligent interpretation. This low-cost, high-efficiency framework aims to solve the problem of weak model generalization caused by the lack of annotated data in change detection. It can effectively incorporate expert domain knowledge and reduce the cost of data annotation while improving model performance. To ensure data quality, a crowdsourcing quality control model is constructed to evaluate the annotation qualifications of the annotators and check their annotation results. Furthermore, a prototype platform for automatic detection and crowdsourced collaborative annotation management is developed, integrating annotation, crowdsourcing quality control, and change detection applications. The proposed framework and platform can help natural resource departments monitor land cover changes efficiently and effectively.

5.
JMIR Public Health Surveill ; 10: e52093, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38488832

ABSTRACT

BACKGROUND: The proliferation of digital disease-detection systems has led to earlier warning signals and, subsequently, swifter responses to emerging threats. Such highly sensitive systems can also produce weak signals that need additional information before action can be taken. Delays in responding to a genuine health threat are often due to the time it takes to verify a health event; this delay in outbreak verification was the main impetus for creating EpiCore. OBJECTIVE: This paper describes the potential of crowdsourcing information through EpiCore, a network of volunteer human, animal, and environmental health professionals who support the verification of early warning signals of potential outbreaks and inform risk assessments by monitoring ongoing threats. METHODS: This paper uses summary statistics to assess whether EpiCore is meeting its goal of accelerating the time to verification of identified potential health events for epidemic and pandemic intelligence purposes around the world. Data from the EpiCore platform from January 2018 to December 2022 were analyzed to capture request-for-information response rates and verification rates. Illustrative use cases describe how EpiCore members provide information to facilitate the verification of early warning signals of potential outbreaks and to support the monitoring and risk assessment of ongoing threats. RESULTS: Since its launch in 2016, EpiCore network membership grew to over 3300 individuals during the first 2 years, consisting of professionals in human, animal, and environmental health spanning 161 countries. The overall EpiCore response rate to requests for information increased year over year between 2018 and 2022, from 65.4% to 68.8%, with an initial response typically received within 24 hours (in 2022, 94% of answered requests received a first contribution within 24 hours). Five use cases illustrate the various applications of EpiCore. CONCLUSIONS: As the global demand for data to facilitate disease prevention and control continues to grow, it will be crucial for traditional and nontraditional methods of disease surveillance to work together to ensure health threats are captured earlier. EpiCore is an innovative approach that can support health authorities in decision-making when used to complement official early detection and verification systems. EpiCore can shorten the time to verification by confirming early detection signals, informing risk-assessment activities, and monitoring ongoing events.


Subject(s)
Disease Outbreaks; Health Personnel; Animals; Humans; Disease Outbreaks/prevention & control; Pandemics
6.
Sci Total Environ ; 926: 171932, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38522527

ABSTRACT

Per- and polyfluoroalkyl substances (PFAS) are a class of persistent chemicals that have been associated with a diverse array of adverse environmental and human health effects. In addition to a growing list of health concerns, PFAS are ubiquitously used and pervasive in our natural and built environments; they are highly mobile once released into the environment and exceptionally resistant to degradation. As such, PFAS have been detected in a wide variety of environmental matrices, including soil, water, and biota; however, the matrix that largely dictates human exposure to PFAS is drinking water, in large part due to their abundance in water sources and our reliance on drinking water. Because Florida is heavily reliant upon water from varying sources, the primary objective of this study was to survey the presence of PFAS in drinking water collected from taps across the state of Florida (United States). In this study, 448 drinking water samples were collected by networking with trained citizen scientists, with at least one sample collected from each of Florida's 67 counties. Well water, tap water, and bottled water, all sourced from Florida, were extracted and analyzed for 31 PFAS using isotope dilution and ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). Overall, for ∑PFAS, the minimum, maximum, median, and mean were ND (not detected), 219, 2.90, and 14.06 ng/L, respectively. These data allowed for a geographic comparison of PFAS in drinking water within the state of Florida, providing vital baseline concentrations for prospective monitoring and highlighting hotspots that require additional testing and mitigation. By incorporating citizen scientists into the study, we aimed to educate impacted communities regarding water quality issues and solutions.


Subject(s)
Alkanesulfonic Acids; Crowdsourcing; Drinking Water; Fluorocarbons; Water Pollutants, Chemical; Humans; Florida; Prospective Studies; Tandem Mass Spectrometry; Fluorocarbons/analysis; Water Pollutants, Chemical/analysis; Alkanesulfonic Acids/analysis
7.
Conserv Biol ; : e14257, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38545678

ABSTRACT

The expanding use of community science platforms has led to an exponential increase in biodiversity data in global repositories. Yet our understanding of species distributions remains patchy. Biodiversity data from social media can potentially reduce the global biodiversity knowledge gap. However, practical guidelines and standardized methods for harvesting such data are nonexistent. Following data privacy and protection safeguards, we devised a standardized method for extracting species distribution records from Facebook groups that allow access to their data. It involves three steps: group selection, data extraction, and georeferencing the record location. We present how to structure keywords, search for species photographs, and georeference localities for such records. We further highlight some challenges users might face when extracting species distribution data from Facebook and suggest solutions. Following our proposed framework, we present a case study on the biodiversity of Bangladesh, a tropical megadiverse South Asian country. We scraped nearly 45,000 unique georeferenced records across 967 species and found a median of 27 records per species. About 12% of the distribution data were for threatened species, which represented 27% of all species. We also obtained data for 56 Data Deficient species in Bangladesh. If carefully harvested, social media data can significantly reduce global biodiversity knowledge gaps. Consequently, developing an automated tool to extract and interpret social media biodiversity data is a research priority.



8.
Heliyon ; 10(5): e26881, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38434368

ABSTRACT

The quality of a crowdsourcing virtual community is an essential factor that stimulates users' sense of belonging and attachment to the community, thereby influencing their behavior. As a prerequisite for the development of "creative crowdsourcing," it is particularly important to study how users' voice behavior can be promoted in virtual communities. Drawing on the Stimulus-Organism-Response (SOR) framework and social identification theory, this study developed a conceptual model that investigates the impact of a crowdsourcing virtual community's system, information, interaction, and service quality on users' voice behavior. Furthermore, we introduce community identification and self-disclosure to further analyze the mechanism linking community quality and voice behavior. Data were collected through 672 survey questionnaires from participants in well-known crowdsourcing virtual communities such as Xiaohongshu, Bilibili, Haier Hope, Test Baidu, and Test China. Using hierarchical regression and bootstrap analysis, we found a positive correlation between the quality of a crowdsourcing virtual community and users' voice behavior, with community identification acting as a mediator. Furthermore, self-disclosure had a significant moderating effect on the relationship between community identification and voice behavior. These findings contribute to theory by advancing the SOR framework within virtual communities; they deepen understanding of crowdsourcing virtual community quality and provide theoretical and practical implications for managers and users on how to promote voice behavior.

9.
Perspect Psychol Sci ; : 17456916241234328, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38451252

ABSTRACT

In response to Webb and Tangney (2022), we call into question the conclusion that data collected on Amazon's Mechanical Turk (MTurk) were "at best-only 2.6% valid" (p. 1). We suggest that Webb and Tangney made certain choices during the study-design and data-collection process that adversely affected the quality of the data collected. As a result, the anecdotal experience of these authors provides weak evidence for the implied claim that MTurk yields low-quality data. In our commentary, we highlight best-practice recommendations and make suggestions for more effectively collecting and screening online panel data.

10.
Angew Chem Int Ed Engl ; 63(13): e202317338, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38391056

ABSTRACT

For five years now, Merck KGaA, Darmstadt, Germany, has hosted the Compound Challenge, a global retrosynthesis competition. When the event kicked off in 2018, on the occasion of the company's 350th anniversary, no one could have predicted the path it would take: from a novel competition to a pivotal event within the synthetic chemistry community. But what makes the Compound Challenge tick, and what drives its popularity? More importantly, what lessons can be taken from the Compound Challenge and applied to other challenges in scientific education and outreach? In this Viewpoint Article, we tell the story of the Compound Challenge, from its inception to its current status. By examining feedback following each of its iterations, we begin to define what makes an open innovation challenge so compelling. It is our hope that educators, leaders, and innovators will learn from our successes as well as our mistakes and apply these lessons to their future outreach activities.

11.
Glob Chang Biol ; 30(2): e17167, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38348640

ABSTRACT

Land use intensification favours particular trophic groups, which can induce architectural changes in food webs. These changes can affect ecosystem functions, services, stability, and resilience. However, the imprint of land management intensity on food-web architecture has rarely been characterized across large spatial extents and diverse land uses. We investigated the influence of land management intensity on six facets of food-web architecture, namely the proportions of apex and basal species, connectance, omnivory, trophic chain lengths, and compartmentalization, for 67,051 European terrestrial vertebrate communities. We also assessed how this influence of intensification depends on land use and climate. In addition to more commonly considered climatic factors, the architecture of food webs was notably influenced by land use and management intensity. Intensification strongly lowered the proportion of apex predators consistently across contexts. In general, intensification also tended to lower the proportion of basal species, favour mesopredators, and decrease food-web compartmentalization while increasing connectance. However, the response of food webs to intensification differed in some contexts. Intensification sharply decreased connectance in Mediterranean and Alpine settlements, and it increased basal tetrapod proportions and compartmentalization in Mediterranean forests and Atlantic croplands. In addition, intensive urbanization especially favoured longer trophic chains and lower omnivory. By favouring mesopredators in most contexts, intensification could undermine basal tetrapods, and these cascading effects need to be assessed. Our results support the importance of protecting top predators where possible and raise questions about the long-term stability of food webs in the face of human-induced pressures.


Subject(s)
Ecosystem; Food Chain; Animals; Humans; Vertebrates/physiology; Forests; Climate
12.
Comput Electron Agric ; 217: None, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38343602

ABSTRACT

Experimental citizen science offers new ways to organize on-farm testing of crop varieties and other agronomic options. Its implementation at scale requires software that streamlines experimental design, data collection, and analysis so that different organizations can support trials. This article describes ClimMob, software developed to facilitate experimental citizen science in agriculture. We describe the software design process, including our initial design choices, the architecture and functionality of ClimMob, and the methodology used for incorporating user feedback. Initial design choices were guided by the need to shape a workflow that is feasible for farmers and relevant for farmers, breeders, and other decision-makers. Workflow and software concepts were developed concurrently. The resulting approach supported by ClimMob is triadic comparisons of technology options (tricot), which lets farmers make simple comparisons between crop varieties or other agricultural technologies tested on farms. The software was built using component-based software engineering (CBSE) to allow for a flexible, modular design that is easy to maintain. The source code is open source and builds on existing components that generally have a broad user community, ensuring their continuity in the future. Key components include Open Data Kit, ODK Tools, and the PyUtilib Component Architecture. The design of experiments and the data analysis are handled through R packages, all available on CRAN. Constant user feedback and short communication lines between the development team and users were crucial in the development process. Development will continue to further improve the user experience, expand data collection methods and media channels, ensure integration with other systems, and further improve support for data-driven decision-making.
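As a toy illustration of the tricot idea, the sketch below tallies best/worst choices from triadic comparisons into a crude ranking. The variety names and trial data are invented, and the actual ClimMob analysis relies on R packages on CRAN (Plackett-Luce-type ranking models) rather than this naive score.

```python
from collections import defaultdict

def score_tricot(trials):
    """Naive tally over tricot trials: each trial is (options, best, worst),
    where a farmer tested three options and named the best and the worst.
    Best earns +1, worst earns -1, and the middle option earns 0."""
    scores = defaultdict(int)
    for options, best, worst in trials:
        for opt in options:
            scores[opt] += (opt == best) - (opt == worst)
    return dict(scores)

# Hypothetical trials over four crop varieties A-D
trials = [
    (("A", "B", "C"), "A", "C"),
    (("A", "B", "D"), "B", "D"),
    (("A", "C", "D"), "A", "D"),
]
scores = score_tricot(trials)
ranking = sorted(scores, key=scores.get, reverse=True)
# ranking == ["A", "B", "C", "D"]
```

The appeal of the triadic format is that each farmer only has to answer "which of these three was best, and which was worst?", which is feasible without literacy-heavy scoring forms.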

13.
JMIR Res Protoc ; 13: e52205, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38329783

ABSTRACT

BACKGROUND: A considerable number of minors in the United States are diagnosed with developmental or psychiatric conditions, a figure potentially deflated by underdiagnosis factors such as cost, distance, and clinician availability. Despite the potential of digital phenotyping tools with machine learning (ML) approaches to expedite diagnoses and enhance diagnostic services for pediatric psychiatric conditions, existing methods fall short because they rely on a narrow set of social features for prediction tasks and focus on a single binary prediction, resulting in uncertain accuracies. OBJECTIVE: This study aims to propose the development of a gamified web system for data collection, followed by a fusion of novel crowdsourcing algorithms with ML behavioral feature extraction approaches, to simultaneously predict diagnoses of autism spectrum disorder and attention-deficit/hyperactivity disorder in a precise and specific manner. METHODS: The proposed pipeline will consist of (1) gamified web applications that curate videos of social interactions adaptively based on the needs of the diagnostic system, (2) behavioral feature extraction techniques consisting of automated ML methods and novel crowdsourcing algorithms, and (3) ML models that classify several conditions simultaneously and adaptively request additional information based on uncertainties about the data. RESULTS: A preliminary version of the web interface has been implemented, and a prior feature selection method has highlighted a core set of behavioral features that can be targeted through the proposed gamified approach. CONCLUSIONS: The prospect for high reward stems from the possibility of creating the first artificial intelligence-powered tool that can identify complex social behaviors well enough to distinguish conditions with nuanced differentiators, such as autism spectrum disorder and attention-deficit/hyperactivity disorder.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/52205.

14.
Radiol Artif Intell ; 6(1): e230006, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38231037

ABSTRACT

In spite of an exponential increase in the volume of medical data produced globally, much of it remains inaccessible to those who might best use it to develop improved health care solutions through advanced analytics such as artificial intelligence. Data liberation and crowdsourcing represent two distinct but interrelated approaches to bridging existing data silos and accelerating the pace of innovation internationally. In this article, we examine these concepts in the context of medical artificial intelligence research, summarizing their potential benefits, identifying potential pitfalls, and ultimately making a case for their expanded use going forward. A practical example of a crowdsourced competition using an international medical imaging dataset is provided. Keywords: Artificial Intelligence, Data Liberation, Crowdsourcing © RSNA, 2023.


Subject(s)
Biomedical Research; Crowdsourcing; Holometabola; Animals; Artificial Intelligence; Health Facilities
15.
Comput Med Imaging Graph ; 112: 102327, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38194768

ABSTRACT

Automated semantic segmentation of histopathological images is an essential task in computational pathology (CPATH). The main limitation of deep learning (DL) for this task is the scarcity of expert annotations. Crowdsourcing (CR) has emerged as a promising solution that reduces the individual (expert) annotation cost by distributing the labeling effort among a group of (non-expert) annotators. Extracting knowledge in this scenario is challenging, as it involves noisy annotations. Jointly learning the underlying (expert) segmentation and the annotators' expertise is a commonly used approach. Unfortunately, this approach is frequently carried out by learning a different neural network for each annotator, which scales poorly as the number of annotators grows. For this reason, the strategy cannot easily be applied to real-world CPATH segmentation. This paper proposes a new family of methods for CR segmentation of histopathological images. Our approach consists of two coupled networks: a segmentation network (for learning the expert segmentation) and an annotator network (for learning the annotators' expertise). We propose to estimate the annotators' behavior with a single network that receives the annotator ID as input, achieving scalability in the number of annotators. Our family comprises three different models for the annotator network; among them, we propose a modeling of the annotator network, novel in the CR segmentation literature, that considers the global features of the image. We validate our methods on a real-world dataset of triple negative breast cancer images labeled by several medical students. Our new CR modeling achieves a Dice coefficient of 0.7827, outperforming the well-known STAPLE method (0.7039) and remaining competitive with the supervised method trained on expert labels (0.7723). The code is available at https://github.com/wizmik12/CRowd_Seg.
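The key scalability idea, one annotator model conditioned on the annotator ID instead of one network per annotator, can be caricatured in NumPy: a shared per-pixel class distribution is pushed through an ID-parameterized confusion matrix to give the likelihood of each annotator's observed labels. The reliability value and confusion-matrix form below are invented placeholders, not the paper's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 3
# Shared "segmentation network" output: p(true class) for 5 pixels
true_probs = rng.dirichlet(np.ones(n_classes), size=5)

def annotator_confusion(annotator_id, n_classes, reliability=0.8):
    """Hypothetical stand-in for the annotator network: a single function of
    the annotator ID returns that annotator's class-confusion matrix, so no
    per-annotator network is needed. Here every ID gets the same mixture of
    identity (correct labeling) and uniform noise."""
    eye = np.eye(n_classes)
    uniform = np.full((n_classes, n_classes), 1.0 / n_classes)
    return reliability * eye + (1.0 - reliability) * uniform

# Probability of each *observed* label from (hypothetical) annotator 7
observed_probs = true_probs @ annotator_confusion(7, n_classes)
# each row remains a probability distribution over the 3 classes
```

In the paper's setting the confusion behavior is learned per annotator ID by a network; the point of the sketch is only that conditioning one model on the ID keeps the parameter count flat as annotators are added.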


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans
16.
Data Brief ; 52: 109976, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38287953

ABSTRACT

The dataset contains full-page screenshots of the homepages of commercial banking (N = 1033), online shopping (N = 1064), and university (N = 1059) websites, as well as raw and aggregated user ratings of webpage design prototypicality, visual aesthetics, perceived usability, and trustworthiness, along with user demographic information. Design prototypicality was measured with three items (typicality, exemplar goodness, and family resemblance), whereas the other design dimensions were measured with a single item each. Amazon Mechanical Turk crowdworkers (N = 3319 rating sessions) provided their demographic data and rated the homepages online. The demographic data have been anonymized, with generated unique participant IDs replacing MTurk crowdworker IDs. The screenshots are identified with generated IDs to provide partial anonymization for the websites, limiting their potential misuse outside design-related or user-experience-related academic research. The raw rating data contain all collected ratings, whereas the aggregated data contain per-webpage, per-dimension ratings derived solely from the ratings of study-compliant crowdworkers. Non-compliance among crowdworkers was detected using several indicators, including rate-rerate consistency, seen-unseen webpage recognition, free-form feedback analyses, and demographic data analyses, among others. Future research could use the dataset either in user studies that require full-page webpages as stimuli (e.g., studies on the determinants of first impressions, user preference, and user experience) or in computational research on web design, including computational aesthetics, as such research requires a large number of user-rated webpages, which this dataset provides.
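One of the listed non-compliance indicators, rate-rerate consistency, can be sketched as a simple filter applied before per-webpage aggregation. The field names, tolerance, and ratings below are hypothetical and do not reproduce the dataset's actual screening rules.

```python
from statistics import mean

def filter_consistent(sessions, max_rerate_gap=1):
    """Keep only sessions whose initial rating and re-rating of the same
    webpage agree within a tolerance (one non-compliance indicator)."""
    return [s for s in sessions
            if abs(s["rating"] - s["rerating"]) <= max_rerate_gap]

# Hypothetical aesthetics ratings for one webpage on a 1-7 scale
sessions = [
    {"rating": 6, "rerating": 6},
    {"rating": 2, "rerating": 7},  # inconsistent rate-rerate -> dropped
    {"rating": 5, "rerating": 4},
]
compliant = filter_consistent(sessions)
aggregated = mean(s["rating"] for s in compliant)  # per-webpage, per-dimension
# aggregated == 5.5
```

The dataset's aggregated files follow the same shape of logic: drop non-compliant sessions first, then average what remains per webpage and per design dimension.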

17.
Public Underst Sci ; 33(2): 142-157, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37861108

ABSTRACT

Citizen science is often celebrated. We interrogate this position by exploring socio-technoscientific phenomena that mirror citizen science yet are misaligned with its ideals. We term this 'Dark Citizen Science'. We identify five conceptual dimensions of citizen science: purpose, process, perceptibility, power, and public effect. Dark citizen science mirrors traditional citizen science in purpose and process but diverges in perceptibility, power, and public effect. We compare two Internet-based categorization processes: the citizen science project Galaxy Zoo and the dark citizen science project Google reCAPTCHA. We highlight that the reader has, likely unknowingly, provided unpaid technoscientific labour to Google. We then apply insights from our analysis of dark citizen science to traditional citizen science. Linking citizen science as practice to a normative democratic ideal ignores how some science-citizen configurations actively pit practice against ideal. Further, failure to fully consider the implications of citizen science for science and society allows exploitative elements of citizen science to evade the sociological gaze.


Subject(s)
Citizen Science; Humans; Community Participation
18.
Perspect Psychol Sci ; 19(2): 477-488, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37594056

ABSTRACT

Identifying successful approaches for reducing belief in and the spread of online misinformation is of great importance. Social media companies currently rely largely on professional fact-checking as their primary mechanism for identifying falsehoods. However, professional fact-checking has notable limitations in coverage and speed. In this article, we summarize research suggesting that the "wisdom of crowds" can be harnessed successfully to help identify misinformation at scale. Despite potential concerns about laypeople's ability to assess information quality, recent evidence demonstrates that aggregating the judgments of groups of laypeople, or crowds, can effectively identify low-quality news sources and inaccurate news posts: crowd ratings are strongly correlated with fact-checker ratings across a variety of studies using different designs, stimulus sets, and subject pools. We connect these experimental findings with recent attempts to deploy crowdsourced fact-checking in the field, and we close with recommendations and future directions for translating crowdsourced ratings into effective interventions.
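The core empirical claim here, that aggregated lay ratings track professional fact-checker ratings, amounts to computing a correlation between per-source crowd means and expert scores. The ratings below are fabricated for illustration only.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical trustworthiness ratings for five news sources:
# three lay raters each (averaged) versus one fact-checker score
crowd = [mean(r) for r in ([4, 5, 4], [2, 1, 2], [3, 3, 4], [1, 2, 1], [5, 4, 5])]
experts = [4.5, 1.5, 3.5, 1.0, 5.0]
r = pearson(crowd, experts)  # strongly positive for these toy data
```

Averaging first is what makes the crowd usable: individual lay ratings are noisy, but their per-source means can line up closely with expert judgments, which is the pattern the studies summarized above report.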


Subject(s)
Crowdsourcing, Social Media, Humans, Communication, Judgment
19.
Conserv Biol ; 38(1): e14161, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37551776

ABSTRACT

Citizen science plays a crucial role in helping monitor biodiversity and inform conservation. With the widespread use of smartphones, many people share biodiversity information on social media, but this information is still not widely used in conservation. Focusing on Bangladesh, a tropical megadiverse and mega-populated country, we examined the importance of social media records in conservation decision-making. We collated species distribution records for birds and butterflies from Facebook and the Global Biodiversity Information Facility (GBIF), grouped them into GBIF-only and combined GBIF and Facebook data, and investigated the differences in identifying critical conservation areas. Adding Facebook data to GBIF data improved the accuracy of systematic conservation planning assessments by identifying additional important conservation areas in the northwest, southeast, and central parts of Bangladesh, extending priority conservation areas by 4,000-10,000 km². Community efforts are needed to drive the implementation of the ambitious Kunming-Montreal Global Biodiversity Framework targets, especially in megadiverse tropical countries with a lack of reliable and up-to-date species distribution data. We highlight that conservation planning can be enhanced by including available data gathered from social media platforms.
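The data-pooling step can be illustrated with a toy sketch: take the union of occurrence records from both sources, bin them into grid cells, and count the cells that only the social-media records cover. The record layout, the 0.5° grid size, and the species name are assumptions for illustration only, not the paper's actual workflow.

```python
def covered_cells(records, cell_deg=0.5):
    """Map (species, lat, lon) occurrence records to occupied grid cells."""
    return {(sp, int(lat // cell_deg), int(lon // cell_deg))
            for sp, lat, lon in records}

def cells_added_by(social, gbif, cell_deg=0.5):
    """Grid cells gained by pooling social-media records with GBIF data."""
    pooled = covered_cells(set(gbif) | set(social), cell_deg)
    return pooled - covered_cells(gbif, cell_deg)

# Invented records roughly within Bangladesh: (species, latitude, longitude)
gbif = [("Pitta sordida", 24.1, 90.2), ("Pitta sordida", 24.2, 90.3)]
facebook = [("Pitta sordida", 24.1, 90.2),   # duplicate of a GBIF record
            ("Pitta sordida", 26.0, 88.5)]   # new northwestern cell
new_cells = cells_added_by(facebook, gbif)
```

Here the Facebook records add one grid cell in the northwest that GBIF alone does not cover, the same kind of coverage gain the abstract reports at national scale.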




Subject(s)
Butterflies, Social Media, Humans, Animals, Conservation of Natural Resources, Biodiversity, Birds
20.
Accid Anal Prev ; 195: 107406, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38091886

ABSTRACT

Non-recurrent traffic congestion arising from traffic incidents is unpredictable but should be addressed efficiently to mitigate its adverse impacts on safety and travel time reliability. Numerous studies have examined incident clearance time, but recovery time, owing to data-collection limitations, is often neglected when assessing incident-induced duration (i.e., the time from incident occurrence to the return of normal traffic flow). Overlooking recovery time is likely to underestimate the total incident-induced impact. Furthermore, the spatiotemporal heterogeneity of observed factors is not adequately captured in incident duration models. To address these gaps, this study specifically investigated traffic crashes, as they reflect safety issues and are the primary cause of non-recurrent congestion. Emerging crowdsourced traffic reports were harnessed to estimate crash recovery time, complementing the blind zones of fixed detectors. A geographically and temporally weighted proportional hazard (GWTPH) model was developed to untangle factors associated with the interval-censored crash duration. The results show that the GWTPH model outperforms the global model in goodness-of-fit, and many factors present spatiotemporally heterogeneous effects. For example, the global model revealed only that deploying dynamic message signs (DMS) shortened the time to return to normal flow, whereas the GWTPH model highlights an average reduction of 32.8% (standard deviation 31%) in time to normal. The study's findings and application of new spatiotemporal techniques are valuable for practitioners seeking to localize incident management strategies; for instance, deploying DMS can be especially helpful in incident-affected corridors during peak hours.
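The core idea of a geographically and temporally weighted model is to fit a local model at each regression point, down-weighting observations that are far away in space or time. A Gaussian kernel is a common choice; the sketch below illustrates such a weighting scheme under assumed bandwidths, not the paper's exact GWTPH specification.

```python
import math

def gwt_weight(d_space, d_time, h_space, h_time):
    """Gaussian spatio-temporal kernel weight.

    d_space, d_time: distances of an observation from the local regression point
    h_space, h_time: bandwidths controlling how quickly influence decays
    Returns a weight in (0, 1]; equals 1 at the regression point itself.
    """
    return math.exp(-(d_space / h_space) ** 2 - (d_time / h_time) ** 2)

# Assumed bandwidths: 10 km spatial, 24 h temporal (illustrative values)
w_near = gwt_weight(0.0, 0.0, h_space=10.0, h_time=24.0)   # at the point
w_far = gwt_weight(30.0, 48.0, h_space=10.0, h_time=24.0)  # far in space and time
```

Each local proportional-hazard fit would then use these weights in its likelihood, so nearby, recent crashes dominate the estimated coefficients at that location and time.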


Subject(s)
Accidents, Traffic, Crowdsourcing, Humans, Reproducibility of Results, Time Factors, Proportional Hazards Models